library(gplots)
library(MASS)
library(pander)
library(magrittr)
library(dplyr)
library(ggplot2)
This was created by calculating the accuracy of each subject per stimulus. Responses were first binarized based on the answer for the fine-structure emotion (1 if correct, 0 if wrong), and the total correct was then divided by the number of stimuli in each category (nb0…nb64).
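A minimal sketch of that computation with `dplyr`, assuming a long-format trial table `d` with hypothetical columns `id`, `nb` (stimulus category), and `correct` (the 0/1 score):

```r
# Per-subject accuracy in each nb category: the mean of the 0/1 scores
# equals the number correct divided by the number of stimuli per category.
acc <- d %>%
  group_by(id, nb) %>%
  summarise(accuracy = mean(correct), .groups = "drop") %>%
  tidyr::pivot_wider(names_from = nb, values_from = accuracy,
                     names_prefix = "nb")
```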
Ordered by row clustering
Ordered by row clustering
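With `gplots` loaded, row-clustered heatmaps like the ones above can be drawn with `heatmap.2`; a sketch, assuming the accuracy matrix is called `acc`:

```r
# Cluster rows (subjects) only; keep the nb0…nb64 columns in their
# natural order and draw only the row dendrogram.
heatmap.2(as.matrix(acc),
          Rowv = TRUE, Colv = FALSE, dendrogram = "row",
          trace = "none", density.info = "none")
```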
I ran the LDA analysis again, this time using nb0 and without age.
\(Group \sim nb0 + nb2 + nb4 + nb8 + nb16 + nb32 + nb64\)
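With `MASS` loaded, the corresponding fit is presumably along these lines (the data frame name `d` is an assumption):

```r
# LDA of class membership on the seven per-category accuracies,
# with age left out of the predictors this time.
fit <- lda(Group ~ nb0 + nb2 + nb4 + nb8 + nb16 + nb32 + nb64, data = d)
```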
## best high low poor
## 0.203125 0.187500 0.140625 0.250000
| | best | high | low | poor | Sum |
|---|---|---|---|---|---|
| Predicted best | 13 | 0 | 3 | 0 | 16 |
| Predicted high | 0 | 12 | 3 | 0 | 15 |
| Predicted low | 3 | 2 | 9 | 0 | 14 |
| Predicted poor | 0 | 2 | 1 | 16 | 19 |
| Sum | 16 | 16 | 16 | 16 | 64 |
## [1] 0.78125
| | best | high | low | poor | Sum |
|---|---|---|---|---|---|
| Predicted best | 0.2031 | 0 | 0.04688 | 0 | 0.25 |
| Predicted high | 0 | 0.1875 | 0.04688 | 0 | 0.2344 |
| Predicted low | 0.04688 | 0.03125 | 0.1406 | 0 | 0.2188 |
| Predicted poor | 0 | 0.03125 | 0.01562 | 0.25 | 0.2969 |
| Sum | 0.25 | 0.25 | 0.25 | 0.25 | 1 |
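The count table, the proportion table, and the 0.78125 overall accuracy can all be derived from the model's predictions; a sketch with assumed object names:

```r
# Cross-tabulate predicted vs. actual class, normalize by the total,
# and take the diagonal proportion as overall accuracy.
pred <- predict(fit)$class
conf <- table(Predicted = pred, Actual = d$Group)
conf / sum(conf)                 # proportions of the 64 subjects
sum(diag(conf)) / sum(conf)      # overall accuracy (0.78125 above)
```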
| type | LDA1 | LDA2 | LDA3 |
|---|---|---|---|
| best | -1.915 | 0.5037 | 0.229 |
| high | -0.05947 | -0.7754 | 0.1844 |
| low | -0.837 | -0.04239 | -0.4417 |
##
## Canonical Discriminant Analysis for type:
##
## CanRsq Eigenvalue Difference Percent Cumulative
## 1 0.76604 3.274204 3.0194 90.8426 90.843
## 2 0.20304 0.254766 3.0194 7.0685 97.911
## 3 0.07002 0.075292 3.0194 2.0890 100.000
##
## Class means:
##
## Can1 Can2 Can3
## best 1.915297 0.503690 -0.228964
## high 0.059472 -0.775385 -0.184380
## low 0.836966 -0.042391 0.441722
## poor -2.811735 0.314085 -0.028379
##
## raw coefficients:
## Can1 Can2 Can3
## nb0 29.32658 -1.0821 0.51118
## nb2 1.31803 -5.6461 3.58824
## nb4 -2.54738 -4.9806 -1.25982
## nb8 -1.00784 5.1086 13.00824
## nb16 2.78036 6.1064 -6.97118
## nb32 -0.78734 -8.2108 -4.07971
## nb64 0.52252 -6.2285 4.32074
##
## std coefficients:
## Can1 Can2 Can3
## nb0 1.028038 -0.037934 0.017919
## nb2 0.136166 -0.583298 0.370702
## nb4 -0.298609 -0.583838 -0.147678
## nb8 -0.114526 0.580518 1.478197
## nb16 0.378146 0.830504 -0.948125
## nb32 -0.087808 -0.915715 -0.454991
## nb64 0.051767 -0.617073 0.428069
##
## structure coefficients:
## Can1 Can2 Can3
## nb0 0.99251 -0.0058117 0.027993
## nb2 0.21560 -0.2162105 0.464289
## nb4 0.13691 -0.0946593 0.344383
## nb8 0.17109 -0.0384168 0.601644
## nb16 0.17374 -0.0584374 0.096065
## nb32 0.14170 -0.6663518 -0.106417
## nb64 -0.26811 -0.5486065 0.086451
| | Can1 | Can2 | Can3 |
|---|---|---|---|
| nb0 | 29.33 | -1.082 | 0.5112 |
| nb2 | 1.318 | -5.646 | 3.588 |
| nb4 | -2.547 | -4.981 | -1.26 |
| nb8 | -1.008 | 5.109 | 13.01 |
| nb16 | 2.78 | 6.106 | -6.971 |
| nb32 | -0.7873 | -8.211 | -4.08 |
| nb64 | 0.5225 | -6.228 | 4.321 |
| | Can1 | Can2 | Can3 |
|---|---|---|---|
| nb0 | 1.028 | -0.03793 | 0.01792 |
| nb2 | 0.1362 | -0.5833 | 0.3707 |
| nb4 | -0.2986 | -0.5838 | -0.1477 |
| nb8 | -0.1145 | 0.5805 | 1.478 |
| nb16 | 0.3781 | 0.8305 | -0.9481 |
| nb32 | -0.08781 | -0.9157 | -0.455 |
| nb64 | 0.05177 | -0.6171 | 0.4281 |
| | Can1 | Can2 | Can3 |
|---|---|---|---|
| nb0 | 0.9925 | -0.005812 | 0.02799 |
| nb2 | 0.2156 | -0.2162 | 0.4643 |
| nb4 | 0.1369 | -0.09466 | 0.3444 |
| nb8 | 0.1711 | -0.03842 | 0.6016 |
| nb16 | 0.1737 | -0.05844 | 0.09607 |
| nb32 | 0.1417 | -0.6664 | -0.1064 |
| nb64 | -0.2681 | -0.5486 | 0.08645 |
## Vector scale factor set to 7.676664
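The canonical discriminant output above matches `candisc::candisc` applied to a multivariate linear model of the seven scores; a sketch (the `candisc` package and the data frame name are assumptions):

```r
library(candisc)
# Fit a multivariate lm of the accuracies on type, then run the
# canonical discriminant analysis; plot() emits the scale-factor message.
mlm <- lm(cbind(nb0, nb2, nb4, nb8, nb16, nb32, nb64) ~ type, data = d)
cd <- candisc(mlm)
summary(cd)
plot(cd)
```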
Note to self: I'm not sure this is correct; I wrote it on paper at the lab and need to check it tomorrow.
## Model df AIC BIC logLik Test L.Ratio p-value
## fst.lme 1 7 -631.0812 -603.4267 322.5406
## fst.lme1 2 9 -693.1272 -657.5715 355.5636 1 vs 2 66.04607 <.0001
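The df counts (7 vs. 9) are consistent with a random-intercept baseline compared against a model that adds a per-subject `fts` slope, both fit by maximum likelihood with `nlme::lme`; a hedged sketch with assumed names:

```r
library(nlme)
# method = "ML" so the likelihood-ratio comparison between the two
# random-effects structures is valid.
fst.lme  <- lme(response ~ fts + type, random = ~ 1 | id,
                data = d, method = "ML")
fst.lme1 <- update(fst.lme, random = ~ fts | id)
anova(fst.lme, fst.lme1)
```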
## Linear mixed-effects model fit by maximum likelihood
## Data: d
## AIC BIC logLik
## -693.1272 -657.5715 355.5636
##
## Random effects:
## Formula: ~fts | id
## Structure: General positive-definite, Log-Cholesky parametrization
## StdDev Corr
## (Intercept) 0.107042555 (Intr)
## fts 0.002306663 -0.821
## Residual 0.074602319
##
## Fixed effects: response ~ fts + type
## Value Std.Error DF t-value p-value
## (Intercept) 0.7618405 0.021252948 319 35.84635 0.0000
## fts -0.0090226 0.000339917 319 -26.54362 0.0000
## typehigh 0.0466320 0.025392925 60 1.83642 0.0713
## typelow 0.0308292 0.025392925 60 1.21409 0.2295
## typepoor 0.0048213 0.025392925 60 0.18987 0.8501
## Correlation:
## (Intr) fts typhgh typelw
## fts -0.535
## typehigh -0.597 0.000
## typelow -0.597 0.000 0.500
## typepoor -0.597 0.000 0.500 0.500
##
## Standardized Within-Group Residuals:
## Min Q1 Med Q3 Max
## -2.6987840 -0.4721199 0.1043887 0.6175353 2.2029515
##
## Number of Observations: 384
## Number of Groups: 64
## Model df AIC BIC logLik Test L.Ratio p-value
## fst.lme1 1 9 -693.1272 -657.5715 355.5636
## fst.lme2 2 7 -565.7383 -538.0838 289.8691 1 vs 2 131.389 <.0001
Is accuracy in the responses explained by class membership and gender?
mod1 <- aov(accuracy[,1]~factor(cases$class)+factor(cases$gender))
summary(mod1)
## Df Sum Sq Mean Sq F value Pr(>F)
## factor(cases$class) 3 0.22681 0.07560 60.548 <2e-16 ***
## factor(cases$gender) 1 0.00006 0.00006 0.049 0.826
## Residuals 59 0.07367 0.00125
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
TukeyHSD(mod1)
## Tukey multiple comparisons of means
## 95% family-wise confidence level
##
## Fit: aov(formula = accuracy[, 1] ~ factor(cases$class) + factor(cases$gender))
##
## $`factor(cases$class)`
## diff lwr upr p adj
## high-best -0.06250000 -0.095529357 -0.029470643 0.0000315
## low-best -0.03515625 -0.068185607 -0.002126893 0.0327109
## poor-best -0.16015625 -0.193185607 -0.127126893 0.0000000
## low-high 0.02734375 -0.005685607 0.060373107 0.1383047
## poor-high -0.09765625 -0.130685607 -0.064626893 0.0000000
## poor-low -0.12500000 -0.158029357 -0.091970643 0.0000000
##
## $`factor(cases$gender)`
## diff lwr upr p adj
## M-F 0.001953125 -0.01572368 0.01962993 0.8257844